Search for: All records

Creators/Authors contains: "Maja J."


  1. Observing how infants and mothers coordinate their behaviors can highlight meaningful patterns in early communication and infant development. While dyads often differ in the modalities they use to communicate, especially in the first year of life, it remains unclear how to capture coordination across multiple types of behaviors using existing computational models of interpersonal synchrony. This paper explores Dynamic Mode Decomposition with control (DMDc) as a method of integrating multiple signals from each communicating partner into a model of multimodal behavioral coordination. We used an existing video dataset to track the head pose, arm pose, and vocal fundamental frequency of infants and mothers during the Face-to-Face Still-Face (FFSF) procedure, a validated 3-stage interaction paradigm. For each recorded interaction, we fit both unimodal and multimodal DMDc models to the extracted pose data. The resulting dynamic characteristics of the models were analyzed to evaluate trends in individual behaviors and dyadic processes across infant age and stages of the interactions. Results demonstrate that observed trends in interaction dynamics across stages of the FFSF protocol were stronger and more significant when models incorporated both head and arm pose data, rather than a single behavior modality. Model output showed significant trends across age, identifying changes in infant movement and in the relationship between infant and mother behaviors. Models that included mothers’ audio data demonstrated similar results to those evaluated with pose data, confirming that DMDc can leverage different sets of behavioral signals from each interacting partner. Taken together, our results demonstrate the potential of DMDc toward integrating multiple behavioral signals into the measurement of multimodal interpersonal coordination. 
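The DMDc fit described in the abstract above reduces to a least-squares problem: stack one partner's multimodal state snapshots as X, treat the other partner's signals as a control input U, and solve X' ≈ [A B][X; U]. A minimal sketch on synthetic data follows; the dimensions, signal names, and dynamics here are illustrative stand-ins, not the paper's actual extracted features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the infant state x_t stacks behavior signals
# (e.g. head and arm pose), and the mother's signals act as control u_t.
n, k, m = 4, 3, 200          # state dim, control dim, number of snapshots
A_true = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B_true = 0.1 * rng.standard_normal((n, k))

U = rng.standard_normal((k, m - 1))
X = np.zeros((n, m))
X[:, 0] = rng.standard_normal(n)
for t in range(m - 1):
    X[:, t + 1] = A_true @ X[:, t] + B_true @ U[:, t]

# DMDc: solve X' ≈ [A B] @ [X; U] in the least-squares sense.
X0, X1 = X[:, :-1], X[:, 1:]
Omega = np.vstack([X0, U])         # stacked state + control snapshots
G = X1 @ np.linalg.pinv(Omega)     # joint estimate of [A B]
A_hat, B_hat = G[:, :n], G[:, n:]
```

The eigenvalues of the recovered A (and the norm of B) are the kind of dynamic characteristics the paper analyzes across interaction stages; full DMDc additionally truncates via the SVD before forming the operator, which this sketch omits.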
  2. With their ability to embody users in physically distant spaces, telepresence robots have gained popularity in environments including hospitals, schools, and offices. However, with platforms lacking in individuation and social presence, users often personalize telepresence robots with clothing and accessories to increase their recognizability and sense of embodiment. Toward understanding personalization preferences, as well as perceptions of personalized platforms, we conducted a series of five studies that investigate patterns in personalization of a telepresence robot and evaluate the impacts of common personalizations along five dimensions (robot uniqueness, humanness, pleasantness, unpleasantness, and people's willingness to interact with it). Finding a strong preference for the use of clothing and headwear in Studies 1-2 (N=52), we systematically manipulated a robot's appearance using these items and evaluated the qualitative and quantitative impacts on observer perceptions in Studies 3-4 (N=160). Observing that personalization increased perceptions of uniqueness and humanness, but also decreased positive responding, we then investigated the associations between personalization preferences and perceptions via a fifth study (N=100). Across the five studies, tensions emerged between operators' interest in using wigs and interlocutors' dislike of wigs. This result highlights a need to consider both operator and interlocutor perspectives when personalizing telepresence robots.
  4. As improvements in medicine lower infant mortality rates, more infants with neuromotor challenges survive past birth. The motor, social, and cognitive development of these infants is closely interrelated, and challenges in any of these areas can lead to developmental differences. Thus, analyzing one of these domains - the motion of young infants - can yield insights on developmental progress to help identify individuals who would benefit most from early interventions. In the presented data collection, we gathered day-long inertial motion recordings from N = 12 typically developing (TD) infants and N = 24 infants who were classified as at risk for developmental delays (AR) due to complications at or before birth. As a first research step, we used simple machine learning methods (decision trees, k-nearest neighbors, and support vector machines) to classify infants as TD or AR based on their movement recordings and demographic data. Our next aim was to predict future outcomes for the AR infants using the same simple classifiers trained on the same movement recordings and demographic data. We achieved a 94.4% overall accuracy in classifying infants as TD or AR, and an 89.5% overall accuracy predicting future outcomes for the AR infants. The addition of inertial data was much more important for producing accurate future predictions than for identifying current status. This work is an important step toward helping stakeholders to monitor the developmental progress of AR infants and identify infants who may be at the greatest risk for ongoing developmental challenges.
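One of the simple classifiers named above, k-nearest neighbors, can be sketched in a few lines: label each new sample by majority vote among its k closest training samples. The feature vectors and labels below are synthetic illustrations, not data from the study.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query row by majority vote among its k nearest
    training points under Euclidean distance."""
    preds = []
    for q in X_query:
        dists = np.linalg.norm(X_train - q, axis=1)
        nearest_labels = y_train[np.argsort(dists)[:k]]
        vals, counts = np.unique(nearest_labels, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Illustrative features: imagine summary statistics of day-long inertial
# recordings plus demographics; labels 0 = TD, 1 = AR are synthetic.
rng = np.random.default_rng(1)
X_td = rng.normal(0.0, 0.5, size=(20, 3))
X_ar = rng.normal(2.0, 0.5, size=(20, 3))
X_train = np.vstack([X_td, X_ar])
y_train = np.array([0] * 20 + [1] * 20)

# The first query sits in the TD cluster, the second in the AR cluster.
queries = np.array([[0.1, -0.2, 0.0], [2.1, 1.9, 2.2]])
preds = knn_predict(X_train, y_train, queries, k=5)
```

In practice the study's reported accuracies would come from cross-validated evaluation on real features; this sketch only shows the voting mechanism itself.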
  5. Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with and enhance human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents the analysis of eye gaze tracking data from five 6-8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants' head-mounted camera and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated to be more surprising than average. We also present an initial exploration using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
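Bayesian surprise, as used in models like the one above, is commonly defined as the KL divergence between the posterior and prior beliefs after an observation. The paper's model operates on video and robot-event timing; the sketch below is only a toy illustration of the definition, assuming a Beta-Bernoulli belief over a single binary event.

```python
import numpy as np
from math import lgamma

def beta_pdf(theta, a, b):
    """Beta(a, b) density, computed via log-gamma for numerical stability."""
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(log_norm + (a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta))

def surprise(a, b, x):
    """Bayesian surprise of a Bernoulli outcome x (0 or 1) under a Beta(a, b)
    prior: KL(posterior || prior), approximated by trapezoidal integration."""
    theta = np.linspace(1e-6, 1 - 1e-6, 20001)
    prior = beta_pdf(theta, a, b)
    post = beta_pdf(theta, a + x, b + (1 - x))   # conjugate update
    integrand = post * np.log(post / prior)
    d = theta[1] - theta[0]
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2) * d)

# A Beta(8, 2) prior strongly expects x = 1, so observing x = 0 shifts
# the posterior more and therefore yields a larger surprise:
print(surprise(8, 2, 0) > surprise(8, 2, 1))  # True
```

The same KL-between-posterior-and-prior quantity generalizes to richer observation models (e.g. per-pixel feature distributions in a video-based surprise map), which is the setting the study evaluates.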